Adding `safetensors` variant of this model

#4
by aisltnab - opened
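
For context on this change: a safetensors variant of an existing PyTorch checkpoint is typically produced by loading the original weights and re-saving them with safe serialization. The snippet below is only a minimal sketch of that workflow using the `transformers` API, not the exact procedure used for this PR.

```python
# Minimal sketch (illustrative, not the exact steps behind this PR): convert the
# existing NexusRaven-V2-13B checkpoint to sharded safetensors files.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nexusflow/NexusRaven-V2-13B"

# Load the original weights (roughly 26 GB of parameters).
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Re-save with safe serialization; transformers writes sharded
# model-XXXXX-of-XXXXX.safetensors files plus model.safetensors.index.json.
model.save_pretrained("NexusRaven-V2-13B-safetensors", safe_serialization=True)
tokenizer.save_pretrained("NexusRaven-V2-13B-safetensors")
```

Downstream users can then opt into the new files with `AutoModelForCausalLM.from_pretrained(model_id, use_safetensors=True)`.
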
LICENSE.txt CHANGED
@@ -1,41 +1,126 @@
1
- Nexusflow.ai License Terms
 
2
 
3
- NexusRaven-V2 Version Release Date: December 5, 2023
 
4
 
5
- “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Nexusflow Materials set forth herein.
 
 
6
 
7
- “Documentation” means the specifications, manuals and documentation accompanying NexusRaven-V2 distributed by Nexusflow at https://huggingface.co/Nexusflow/NexusRaven-V2-13B, if any.
8
 
9
- “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
10
 
11
- “NexusRaven-V2” means the large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing made available by Nexusflow at https://huggingface.co/Nexusflow/NexusRaven-V2-13B.
 
12
 
13
- “Nexusflow Materials” means, collectively, Nexusflow’s proprietary NexusRaven-V2 and Documentation (and any portion thereof) made available under this Agreement.
 
 
14
 
15
- “Nexusflow” or “we” means Nexusflow.ai Inc.
 
16
 
17
- By using or distributing any portion or element of the Nexusflow Materials, you agree to be bound by this Agreement.
18
- 1. License Rights and Redistribution.
19
- a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Nexusflow’s intellectual property or other rights owned by Nexusflow embodied in the Nexusflow Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Nexusflow Materials.
20
- b. Redistribution and Use.
21
- i. If you distribute or make the Nexusflow Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party.
22
- ii. If you receive Nexusflow Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 1 of this Agreement will not apply to you.
23
- iii. You must retain in all copies of the Nexusflow Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “NexusRaven-V2 is licensed under the Nexusflow License, Copyright © Nexusflow.ai Inc. All Rights Reserved.”
24
- iv. Your use of the Nexusflow Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to Nexusflow terms and policies (if any), which are hereby incorporated by reference into this Agreement. The Nexusflow Materials are derived from Llama 2 as offered by Meta Platforms Ireland Limited or Meta Platforms, Inc., and you further agree that your use of the Nexusflow Materials shall be subject to the applicable terms and conditions of the Llama 2 Community License Agreement, available at https://ai.meta.com/llama/license/.
25
- v. You will not use the Nexusflow Materials or any output or results of the Nexusflow Materials to improve any other large language model (excluding NexusRaven-V2 or derivative works thereof).
26
 
27
- 2. Additional Commercial Terms. If, on the NexusRaven-V2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 50 million monthly active users in the preceding calendar month, you must request a license from Nexusflow, which Nexusflow may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Nexusflow otherwise expressly grants you such rights.
28
 
29
- 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE NEXUSFLOW MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE NEXUSFLOW MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE NEXUSFLOW MATERIALS AND ANY OUTPUT AND RESULTS.
30
 
31
- 4. Limitation of Liability. IN NO EVENT WILL NEXUSFLOW, ITS LICENSORS OR AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF NEXUSFLOW OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
 
 
 
32
33
  5. Intellectual Property.
34
- a. No trademark licenses are granted under this Agreement, and in connection with the Nexusflow Materials, neither Nexusflow nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and using the Nexusflow Materials.
35
- b. Subject to Nexusflow’s ownership of Nexusflow Materials and derivatives made by or for Nexusflow (and any rights retained therein by its licensors to the foregoing), with respect to any derivative works and modifications of the Nexusflow Materials that are made by you, as between you and Nexusflow, you are and will be the owner of such derivative works and modifications.
36
- c. You will indemnify and hold harmless Nexusflow from and against any claim by any third party arising out of or related to your use of the Nexusflow Materials.
37
 
38
- 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Nexusflow Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Nexusflow may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Nexusflow Materials. Sections 3, 4, 5.c. (the last sentence) and 7 shall survive the termination of this Agreement.
39
 
40
- 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
41
 
 
1
+ LLAMA 2 COMMUNITY LICENSE AGREEMENT
2
+ Llama 2 Version Release Date: July 18, 2023
3
 
4
+ "Agreement" means the terms and conditions for use, reproduction, distribution and
5
+ modification of the Llama Materials set forth herein.
6
 
7
+ "Documentation" means the specifications, manuals and documentation
8
+ accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-
9
+ libraries/llama-downloads/.
10
 
11
+ "Licensee" or "you" means you, or your employer or any other person or entity (if
12
+ you are entering into this Agreement on such person or entity's behalf), of the age
13
+ required under applicable laws, rules or regulations to provide legal consent and that
14
+ has legal authority to bind your employer or such other person or entity if you are
15
+ entering in this Agreement on their behalf.
16
 
17
+ "Llama 2" means the foundational large language models and software and
18
+ algorithms, including machine-learning model code, trained model weights,
19
+ inference-enabling code, training-enabling code, fine-tuning enabling code and other
20
+ elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-
21
+ libraries/llama-downloads/.
22
 
23
+ "Llama Materials" means, collectively, Meta's proprietary Llama 2 and
24
+ Documentation (and any portion thereof) made available under this Agreement.
25
 
26
+ "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you
27
+ are an entity, your principal place of business is in the EEA or Switzerland) and Meta
28
+ Platforms, Inc. (if you are located outside of the EEA or Switzerland).
29
 
30
+ By clicking "I Accept" below or by using or distributing any portion or element of the
31
+ Llama Materials, you agree to be bound by this Agreement.
32
 
33
+ 1. License Rights and Redistribution.
34
 
35
+ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
36
+ transferable and royalty-free limited license under Meta's intellectual property or
37
+ other rights owned by Meta embodied in the Llama Materials to use, reproduce,
38
+ distribute, copy, create derivative works of, and make modifications to the Llama
39
+ Materials.
40
+
41
+ b. Redistribution and Use.
42
 
43
+ i. If you distribute or make the Llama Materials, or any derivative works
44
+ thereof, available to a third party, you shall provide a copy of this Agreement to such
45
+ third party.
46
+ ii. If you receive Llama Materials, or any derivative works thereof, from
47
+ a Licensee as part of an integrated end user product, then Section 2 of this
48
+ Agreement will not apply to you.
49
 
50
+ iii. You must retain in all copies of the Llama Materials that you
51
+ distribute the following attribution notice within a "Notice" text file distributed as a
52
+ part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License,
53
+ Copyright (c) Meta Platforms, Inc. All Rights Reserved."
54
 
55
+ iv. Your use of the Llama Materials must comply with applicable laws
56
+ and regulations (including trade compliance laws and regulations) and adhere to the
57
+ Acceptable Use Policy for the Llama Materials (available at
58
+ https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into
59
+ this Agreement.
60
+
61
+ v. You will not use the Llama Materials or any output or results of the
62
+ Llama Materials to improve any other large language model (excluding Llama 2 or
63
+ derivative works thereof).
64
+
65
+ 2. Additional Commercial Terms. If, on the Llama 2 version release date, the
66
+ monthly active users of the products or services made available by or for Licensee,
67
+ or Licensee's affiliates, is greater than 700 million monthly active users in the
68
+ preceding calendar month, you must request a license from Meta, which Meta may
69
+ grant to you in its sole discretion, and you are not authorized to exercise any of the
70
+ rights under this Agreement unless or until Meta otherwise expressly grants you
71
+ such rights.
72
+
73
+ 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE
74
+ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE
75
+ PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
76
+ EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY
77
+ WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR
78
+ FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
79
+ FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
80
+ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR
81
+ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
82
+
83
+ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE
84
+ LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT,
85
+ NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS
86
+ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL,
87
+ CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
88
+ IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF
89
+ ANY OF THE FOREGOING.
90
+
91
  5. Intellectual Property.
 
 
 
92
 
93
+ a. No trademark licenses are granted under this Agreement, and in
94
+ connection with the Llama Materials, neither Meta nor Licensee may use any name
95
+ or mark owned by or associated with the other or any of its affiliates, except as
96
+ required for reasonable and customary use in describing and redistributing the
97
+ Llama Materials.
98
+
99
+ b. Subject to Meta's ownership of Llama Materials and derivatives made by or
100
+ for Meta, with respect to any derivative works and modifications of the Llama
101
+ Materials that are made by you, as between you and Meta, you are and will be the
102
+ owner of such derivative works and modifications.
103
+
104
+ c. If you institute litigation or other proceedings against Meta or any entity
105
+ (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama
106
+ Materials or Llama 2 outputs or results, or any portion of any of the foregoing,
107
+ constitutes infringement of intellectual property or other rights owned or licensable
108
+ by you, then any licenses granted to you under this Agreement shall terminate as of
109
+ the date such litigation or claim is filed or instituted. You will indemnify and hold
110
+ harmless Meta from and against any claim by any third party arising out of or related
111
+ to your use or distribution of the Llama Materials.
112
+
113
+ 6. Term and Termination. The term of this Agreement will commence upon your
114
+ acceptance of this Agreement or access to the Llama Materials and will continue in
115
+ full force and effect until terminated in accordance with the terms and conditions
116
+ herein. Meta may terminate this Agreement if you are in breach of any term or
117
+ condition of this Agreement. Upon termination of this Agreement, you shall delete
118
+ and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the
119
+ termination of this Agreement.
120
 
121
+ 7. Governing Law and Jurisdiction. This Agreement will be governed and
122
+ construed under the laws of the State of California without regard to choice of law
123
+ principles, and the UN Convention on Contracts for the International Sale of Goods
124
+ does not apply to this Agreement. The courts of California shall have exclusive
125
+ jurisdiction of any dispute arising out of this Agreement.
126
 
README.md CHANGED
@@ -1,11 +1,9 @@
1
  ---
2
- license: other
3
  base_model: codellama/CodeLlama-13b-Instruct-hf
4
  model-index:
5
  - name: NexusRaven-13B
6
  results: []
7
- tags:
8
- - function calling
9
  ---
10
  # NexusRaven-13B: Surpassing GPT-4 for Zero-shot Function Calling
11
  <p align="center">
@@ -37,13 +35,7 @@ Please checkout the following links!
37
 
38
  ## NexusRaven-V2 model usage
39
 
40
- NexusRaven-V2 accepts a list of python functions.
41
-
42
- These python functions can do anything (including sending GET/POST requests to external APIs!).
43
-
44
- The two requirements include the python function signature and the appropriate docstring to generate the function call.
45
-
46
- NexusRaven-V2 also does best on functions with arguments, so please always only provide functions that require arguments to raven.
47
 
48
  ### NexusRaven-V2's Capabilities
49
 
@@ -51,32 +43,11 @@ NexusRaven-V2 is capable of generating deeply nested function calls, parallel fu
51
 
52
  ### Quick Start Prompting Guide
53
 
54
- Please refer to our notebook, [How-To-Prompt.ipynb](https://colab.research.google.com/drive/19JYixRPPlanmW5q49WYi_tU8rhHeCEKW?usp=sharing), for more advanced tutorials on using NexusRaven-V2!
55
-
56
- 1. When giving docstrings to Raven, please provide well-indented, detailed, and well-written docstrings as this can help accuracy.
57
- 2. Raven does better when all functions provided to it has arguments, either required or optional, (i.e. ```func(dummy_arg)``` is preferred over ```func()```) as this can help accuracy.
58
- 3. We strongly recommend to set sampling to False when prompting NexusRaven-V2.
59
- 4. We strongly recommend a very low temperature (~0.001).
60
- 5. We strongly recommend following the prompting style below.
61
-
62
- When handling irrelevant user queries, users have noticed that specifying a "no-op" function with arguments work best. For example, something like this might work:
63
- ```python
64
- def no_relevant_function(user_query : str):
65
- """
66
- Call this when no other provided function can be called to answer the user query.
67
-
68
- Args:
69
- user_query: The user_query that cannot be answered by any other function calls.
70
- """
71
- ```
72
-
73
- Please ensure to provide an argument to this function, as Raven works best on functions with arguments.
74
 
75
- For parallel calls, due to the model being targeted for industry use, you can "enable" parallel calls by adding this into the prompt:
76
- ```python
77
- "Setting: Allowed to issue multiple calls with semicolon\n"
78
- ```
79
- This can be added above the User Query to "allow" the model to use parallel calls, otherwise, the model will focus on nested and single calls primarily.
80
 
81
  ### Quickstart
82
  You can run the model on a GPU using the following code.
@@ -147,9 +118,6 @@ Please follow this prompting template to maximize the performance of RavenV2.
147
 
148
  [If you currently have a workflow that is built around OpenAI's function calling and you want to try NexusRaven-V2, we have a package that helps you drop in NexusRaven-V2.](https://github.com/nexusflowai/nexusraven-pip)
149
 
150
- ### Using With LangChain
151
-
152
- We've also included a [small demo for using Raven with langchain](langdemo.py)!
153
 
154
  ## Evaluation
155
 
@@ -166,7 +134,7 @@ For a deeper dive into the results, please see our [Github README](https://githu
166
  3. The explanations generated by NexusRaven-V2 might be incorrect. Please ensure proper guardrails are present to capture errant behavior.
167
 
168
  ## License
169
- This model was trained on commercially viable data and is licensed under the [Nexusflow community license](https://huggingface.co/Nexusflow/NexusRaven-V2-13B/blob/main/LICENSE.txt).
170
 
171
 
172
  ## References
@@ -195,4 +163,4 @@ We thank the CodeLlama team for their amazing models!
195
  ```
196
 
197
  ## Contact
198
- Please join our [Discord Channel](https://discord.gg/HDSVmNAs3y) to reach out for any issues and comments!
 
1
  ---
2
+ license: llama2
3
  base_model: codellama/CodeLlama-13b-Instruct-hf
4
  model-index:
5
  - name: NexusRaven-13B
6
  results: []
 
 
7
  ---
8
  # NexusRaven-13B: Surpassing GPT-4 for Zero-shot Function Calling
9
  <p align="center">
 
35
 
36
  ## NexusRaven-V2 model usage
37
 
38
+ NexusRaven-V2 accepts a list of python functions. These python functions can do anything (including sending GET/POST requests to external APIs!). The two requirements are the python function signature and the appropriate docstring to generate the function call.
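
As a concrete illustration of those two requirements, a function handed to Raven could look like the hypothetical definition below (not part of the original README; only the signature and the docstring matter to the model):

```python
# Hypothetical function for illustration: Raven only needs the signature and a
# docstring describing each argument in order to emit the call.
def get_weather(city: str, unit: str = "celsius"):
    """
    Retrieves the current weather for a city.

    Args:
        city (str): Required. The name of the city to look up.
        unit (str): Optional. Either "celsius" or "fahrenheit".
    """
```
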
39
 
40
  ### NexusRaven-V2's Capabilities
41
 
 
43
 
44
  ### Quick Start Prompting Guide
45
 
46
+ Please refer to our notebook, [How-To-Prompt.ipynb](How-To-Prompt.ipynb), for more advanced tutorials on using NexusRaven-V2!
47
 
48
+ 1. We strongly recommend setting sampling to False when prompting NexusRaven-V2.
49
+ 2. We strongly recommend a very low temperature (~0.001).
50
+ 3. We strongly recommend following the prompting style below.
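
To make recommendations 1 and 2 concrete, a generation call might look like the sketch below (illustrative only, not taken from the original README; the prompt follows the `Function:` / `User Query: ...<human_end>` shape used elsewhere in this repo):

```python
# Illustrative sketch of the recommended decoding settings; not the official
# quickstart snippet from this README.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Nexusflow/NexusRaven-V2-13B",
    device_map="auto",
)

# Prompt shape: one or more function definitions, then the user query.
prompt = (
    'Function:\ndef get_weather(city: str):\n"""\nGets the weather for a city.\n"""\n\n'
    "User Query: What is the weather in Paris?<human_end>"
)

result = pipe(
    prompt,
    do_sample=False,       # recommendation 1: sampling off
    temperature=0.001,     # recommendation 2: very low temperature
    max_new_tokens=400,
    return_full_text=False,
)
print(result[0]["generated_text"])
```
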
 
 
51
 
52
  ### Quickstart
53
  You can run the model on a GPU using the following code.
 
118
 
119
  [If you currently have a workflow that is built around OpenAI's function calling and you want to try NexusRaven-V2, we have a package that helps you drop in NexusRaven-V2.](https://github.com/nexusflowai/nexusraven-pip)
120
 
 
 
 
121
 
122
  ## Evaluation
123
 
 
134
  3. The explanations generated by NexusRaven-V2 might be incorrect. Please ensure proper guardrails are present to capture errant behavior.
135
 
136
  ## License
137
+ This model was trained on commercially viable data and is licensed under the [Llama 2 community license](https://huggingface.co/codellama/CodeLlama-13b-hf/blob/main/LICENSE) following the original [CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf/) model.
138
 
139
 
140
  ## References
 
163
  ```
164
 
165
  ## Contact
166
+ Please join our [Discord Channel](https://discord.gg/HDSVmNAs3y) to reach out for any issues and comments!
langdemo.py DELETED
@@ -1,148 +0,0 @@
1
- from typing import List, Literal, Union
2
-
3
- import math
4
-
5
- from langchain.tools.base import StructuredTool
6
- from langchain.agents import (
7
- Tool,
8
- AgentExecutor,
9
- LLMSingleActionAgent,
10
- AgentOutputParser,
11
- )
12
- from langchain.schema import AgentAction, AgentFinish, OutputParserException
13
- from langchain.prompts import StringPromptTemplate
14
- from langchain.llms import HuggingFaceTextGenInference
15
- from langchain.chains import LLMChain
16
-
17
-
18
- ##########################################################
19
- # Step 1: Define the functions you want to articulate. ###
20
- ##########################################################
21
-
22
-
23
- def calculator(
24
- input_a: float,
25
- input_b: float,
26
- operation: Literal["add", "subtract", "multiply", "divide"],
27
- ):
28
- """
29
- Computes a calculation.
30
-
31
- Args:
32
- input_a (float) : Required. The first input.
33
- input_b (float) : Required. The second input.
34
- operation (string): The operation. Choices include: add to add two numbers, subtract to subtract two numbers, multiply to multiply two numbers, and divide to divide them.
35
- """
36
- match operation:
37
- case "add":
38
- return input_a + input_b
39
- case "subtract":
40
- return input_a - input_b
41
- case "multiply":
42
- return input_a * input_b
43
- case "divide":
44
- return input_a / input_b
45
-
46
-
47
- def cylinder_volume(radius, height):
48
- """
49
- Calculate the volume of a cylinder.
50
-
51
- Parameters:
52
- - radius (float): The radius of the base of the cylinder.
53
- - height (float): The height of the cylinder.
54
-
55
- Returns:
56
- - float: The volume of the cylinder.
57
- """
58
- if radius < 0 or height < 0:
59
- raise ValueError("Radius and height must be non-negative.")
60
-
61
- volume = math.pi * (radius**2) * height
62
- return volume
63
-
64
-
65
- #############################################################
66
- # Step 2: Let's define some utils for building the prompt ###
67
- #############################################################
68
-
69
-
70
- RAVEN_PROMPT = """
71
- {raven_tools}
72
- User Query: {input}<human_end>
73
-
74
- """
75
-
76
-
77
-
78
- # Set up a prompt template
79
- class RavenPromptTemplate(StringPromptTemplate):
80
- # The template to use
81
- template: str
82
- # The list of tools available
83
- tools: List[Tool]
84
-
85
- def format(self, **kwargs) -> str:
86
- prompt = ""
87
- for tool in self.tools:
88
- func_signature, func_docstring = tool.description.split(" - ", 1)
89
- prompt += f'\nFunction:\ndef {func_signature}\n"""\n{func_docstring}\n"""\n'
90
- kwargs["raven_tools"] = prompt
91
- return self.template.format(**kwargs).replace("{{", "{").replace("}}", "}")
92
-
93
-
94
- class RavenOutputParser(AgentOutputParser):
95
- def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
96
- # Check if agent should finish
97
- if "Call:" in llm_output:
98
- return AgentFinish(
99
- return_values={
100
- "output": llm_output.strip()
101
- .replace("Call:", "")
102
- .strip()
103
- },
104
- log=llm_output,
105
- )
106
- else:
107
- raise OutputParserException(f"Could not parse LLM output: `{llm_output}`")
108
-
109
-
110
- ##################################################
111
- # Step 3: Build the agent with these utilities ###
112
- ##################################################
113
-
114
-
115
- inference_server_url = "https://rjmy54al17scvxjr.us-east-1.aws.endpoints.huggingface.cloud"
116
- assert (
117
- inference_server_url != "<YOUR ENDPOINT URL>"
118
- ), "Please provide your own HF inference endpoint URL!"
119
-
120
- llm = HuggingFaceTextGenInference(
121
- inference_server_url=inference_server_url,
122
- temperature=0.001,
123
- max_new_tokens=400,
124
- do_sample=False,
125
- )
126
- tools = [
127
- StructuredTool.from_function(calculator),
128
- StructuredTool.from_function(cylinder_volume),
129
- ]
130
- raven_prompt = RavenPromptTemplate(
131
- template=RAVEN_PROMPT, tools=tools, input_variables=["input"]
132
- )
133
- llm_chain = LLMChain(llm=llm, prompt=raven_prompt)
134
- output_parser = RavenOutputParser()
135
- agent = LLMSingleActionAgent(
136
- llm_chain=llm_chain,
137
- output_parser=output_parser,
138
- stop=["<bot_end>"],
139
- allowed_tools=tools,
140
- )
141
- agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
142
-
143
- call = agent_chain.run(
144
- "I have a cake that is about 3 centimenters high and 200 centimeters in radius. How much cake do I have?"
145
- )
146
- print(eval(call))
147
- call = agent_chain.run("What is 1+10?")
148
- print(eval(call))

model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:14c0ff1fa640063c6084b6513fe35122dc5625f29b9af8317ee2c0a8444c7216
3
+ size 9948933792
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1f995c96612e274a416caa74e11718a5e7a514023357d93b24139e36e91fe8d0
3
+ size 9904123752
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f94fd7c010d2b7c0c801c3dd94cd3fadbe626c2076bec0cf6e3b465a41053867
3
+ size 6179204888
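
The three `.safetensors` entries above are stored through Git LFS, so the diff shows only pointer files: the `oid` field is the SHA-256 digest of the real shard and `size` is its length in bytes. A small sketch (not part of this repo) of checking a downloaded shard against its recorded digest:

```python
# Sketch only: verify a downloaded shard against the sha256 oid recorded in the
# Git LFS pointer shown above.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "f94fd7c010d2b7c0c801c3dd94cd3fadbe626c2076bec0cf6e3b465a41053867"
assert sha256_of("model-00003-of-00003.safetensors") == expected
```
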
model.safetensors.index.json ADDED
@@ -0,0 +1,370 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 26032220160
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "model-00003-of-00003.safetensors",
7
+ "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
8
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
9
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
10
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
11
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
12
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
13
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
14
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
15
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
16
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
17
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
18
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
19
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
20
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
21
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
22
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
23
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
24
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
25
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
26
+ "model.layers.10.input_layernorm.weight": "model-00001-of-00003.safetensors",
27
+ "model.layers.10.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
28
+ "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
29
+ "model.layers.10.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
30
+ "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
31
+ "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
32
+ "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
33
+ "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
34
+ "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
35
+ "model.layers.11.input_layernorm.weight": "model-00001-of-00003.safetensors",
36
+ "model.layers.11.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
37
+ "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
38
+ "model.layers.11.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
39
+ "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
40
+ "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
41
+ "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
42
+ "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
43
+ "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
44
+ "model.layers.12.input_layernorm.weight": "model-00001-of-00003.safetensors",
45
+ "model.layers.12.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
46
+ "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
47
+ "model.layers.12.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
48
+ "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
49
+ "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
50
+ "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
51
+ "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
52
+ "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
53
+ "model.layers.13.input_layernorm.weight": "model-00001-of-00003.safetensors",
54
+ "model.layers.13.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
55
+ "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
56
+ "model.layers.13.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
57
+ "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
58
+ "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
59
+ "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
60
+ "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
61
+ "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
62
+ "model.layers.14.input_layernorm.weight": "model-00001-of-00003.safetensors",
63
+ "model.layers.14.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
64
+ "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
65
+ "model.layers.14.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
66
+ "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
67
+ "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
68
+ "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
69
+ "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
70
+ "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
71
+ "model.layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
72
+ "model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
73
+ "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
74
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
75
+ "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
76
+ "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
77
+ "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
78
+ "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
79
+ "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
80
+ "model.layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
81
+ "model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
82
+ "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
83
+ "model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
84
+ "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
85
+ "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
86
+ "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
87
+ "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
88
+ "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
89
+ "model.layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
90
+ "model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
91
+ "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
92
+ "model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
93
+ "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
94
+ "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
95
+ "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
96
+ "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
97
+ "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
98
+ "model.layers.18.input_layernorm.weight": "model-00002-of-00003.safetensors",
99
+ "model.layers.18.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
100
+ "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
101
+ "model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
102
+ "model.layers.18.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
103
+ "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
104
+ "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
105
+ "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
106
+ "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
107
+ "model.layers.19.input_layernorm.weight": "model-00002-of-00003.safetensors",
108
+ "model.layers.19.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
109
+ "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
110
+ "model.layers.19.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
111
+ "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
112
+ "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
113
+ "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
114
+ "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
115
+ "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
116
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
117
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
118
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
119
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
120
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
121
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
122
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
123
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
124
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
125
+ "model.layers.20.input_layernorm.weight": "model-00002-of-00003.safetensors",
126
+ "model.layers.20.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
127
+ "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
128
+ "model.layers.20.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
129
+ "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
130
+ "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
131
+ "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
132
+ "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
133
+ "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
134
+ "model.layers.21.input_layernorm.weight": "model-00002-of-00003.safetensors",
135
+ "model.layers.21.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
136
+ "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
137
+ "model.layers.21.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
138
+ "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
139
+ "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
140
+ "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
141
+ "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
142
+ "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
143
+ "model.layers.22.input_layernorm.weight": "model-00002-of-00003.safetensors",
144
+ "model.layers.22.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
145
+ "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
146
+ "model.layers.22.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
147
+ "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
148
+ "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
149
+ "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
150
+ "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
151
+ "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
152
+ "model.layers.23.input_layernorm.weight": "model-00002-of-00003.safetensors",
153
+ "model.layers.23.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
154
+ "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
155
+ "model.layers.23.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
156
+ "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
157
+ "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
158
+ "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
159
+ "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
160
+ "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
161
+ "model.layers.24.input_layernorm.weight": "model-00002-of-00003.safetensors",
162
+ "model.layers.24.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
163
+ "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
164
+ "model.layers.24.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
165
+ "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
166
+ "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
167
+ "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
168
+ "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
169
+ "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
170
+ "model.layers.25.input_layernorm.weight": "model-00002-of-00003.safetensors",
171
+ "model.layers.25.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
172
+ "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
173
+ "model.layers.25.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
174
+ "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
175
+ "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
176
+ "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
177
+ "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
178
+ "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
179
+ "model.layers.26.input_layernorm.weight": "model-00002-of-00003.safetensors",
180
+ "model.layers.26.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
181
+ "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
182
+ "model.layers.26.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
183
+ "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
184
+ "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
185
+ "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
186
+ "model.layers.26.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
187
+ "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
188
+ "model.layers.27.input_layernorm.weight": "model-00002-of-00003.safetensors",
189
+ "model.layers.27.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
190
+ "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
191
+ "model.layers.27.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
192
+ "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
193
+ "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
194
+ "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
195
+ "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
196
+ "model.layers.27.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
197
+ "model.layers.28.input_layernorm.weight": "model-00002-of-00003.safetensors",
198
+ "model.layers.28.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
199
+ "model.layers.28.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
200
+ "model.layers.28.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
201
+ "model.layers.28.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
202
+ "model.layers.28.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
203
+ "model.layers.28.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
204
+ "model.layers.28.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
205
+ "model.layers.28.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
206
+ "model.layers.29.input_layernorm.weight": "model-00002-of-00003.safetensors",
207
+ "model.layers.29.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
208
+ "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
209
+ "model.layers.29.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
210
+ "model.layers.29.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
211
+ "model.layers.29.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
212
+ "model.layers.29.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
213
+ "model.layers.29.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
214
+ "model.layers.29.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
215
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
216
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
217
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
218
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
219
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
220
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
221
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
222
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
223
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
224
+ "model.layers.30.input_layernorm.weight": "model-00003-of-00003.safetensors",
225
+ "model.layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
226
+ "model.layers.30.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
227
+ "model.layers.30.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
228
+ "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
229
+ "model.layers.30.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
230
+ "model.layers.30.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
231
+ "model.layers.30.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
232
+ "model.layers.30.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
233
+ "model.layers.31.input_layernorm.weight": "model-00003-of-00003.safetensors",
234
+ "model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
235
+ "model.layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
236
+ "model.layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
237
+ "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
238
+ "model.layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
239
+ "model.layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
240
+ "model.layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
241
+ "model.layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
242
+ "model.layers.32.input_layernorm.weight": "model-00003-of-00003.safetensors",
243
+ "model.layers.32.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
244
+ "model.layers.32.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
245
+ "model.layers.32.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
246
+ "model.layers.32.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
247
+ "model.layers.32.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
248
+ "model.layers.32.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
249
+ "model.layers.32.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
250
+ "model.layers.32.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
251
+ "model.layers.33.input_layernorm.weight": "model-00003-of-00003.safetensors",
252
+ "model.layers.33.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
253
+ "model.layers.33.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
254
+ "model.layers.33.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
255
+ "model.layers.33.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
256
+ "model.layers.33.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
257
+ "model.layers.33.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
258
+ "model.layers.33.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
259
+ "model.layers.33.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
260
+ "model.layers.34.input_layernorm.weight": "model-00003-of-00003.safetensors",
261
+ "model.layers.34.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
262
+ "model.layers.34.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
263
+ "model.layers.34.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
264
+ "model.layers.34.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
265
+ "model.layers.34.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
266
+ "model.layers.34.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
267
+ "model.layers.34.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
268
+ "model.layers.34.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
269
+ "model.layers.35.input_layernorm.weight": "model-00003-of-00003.safetensors",
270
+ "model.layers.35.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
271
+ "model.layers.35.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
272
+ "model.layers.35.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
273
+ "model.layers.35.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
274
+ "model.layers.35.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
275
+ "model.layers.35.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
276
+ "model.layers.35.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
277
+ "model.layers.35.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
278
+ "model.layers.36.input_layernorm.weight": "model-00003-of-00003.safetensors",
279
+ "model.layers.36.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
280
+ "model.layers.36.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
281
+ "model.layers.36.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
282
+ "model.layers.36.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
283
+ "model.layers.36.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
284
+ "model.layers.36.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
285
+ "model.layers.36.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
286
+ "model.layers.36.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
287
+ "model.layers.37.input_layernorm.weight": "model-00003-of-00003.safetensors",
288
+ "model.layers.37.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
289
+ "model.layers.37.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
290
+ "model.layers.37.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
291
+ "model.layers.37.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
292
+ "model.layers.37.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
293
+ "model.layers.37.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
294
+ "model.layers.37.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
295
+ "model.layers.37.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
296
+ "model.layers.38.input_layernorm.weight": "model-00003-of-00003.safetensors",
297
+ "model.layers.38.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
298
+ "model.layers.38.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
299
+ "model.layers.38.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
300
+ "model.layers.38.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
301
+ "model.layers.38.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
302
+ "model.layers.38.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
303
+ "model.layers.38.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
304
+ "model.layers.38.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
305
+ "model.layers.39.input_layernorm.weight": "model-00003-of-00003.safetensors",
306
+ "model.layers.39.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
307
+ "model.layers.39.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
308
+ "model.layers.39.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
309
+ "model.layers.39.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
310
+ "model.layers.39.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
311
+ "model.layers.39.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
312
+ "model.layers.39.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
313
+ "model.layers.39.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
314
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
315
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
316
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
317
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
318
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
319
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
320
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
321
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
322
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
323
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00003.safetensors",
324
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
325
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
326
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
327
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
328
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
329
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
330
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
331
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
332
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00003.safetensors",
333
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
334
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
335
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
336
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
337
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
338
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
339
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
340
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
341
+ "model.layers.7.input_layernorm.weight": "model-00001-of-00003.safetensors",
342
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
343
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
344
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
345
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
346
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
347
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
348
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
349
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
350
+ "model.layers.8.input_layernorm.weight": "model-00001-of-00003.safetensors",
351
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
352
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
353
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
354
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
355
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
356
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
357
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
358
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
359
+ "model.layers.9.input_layernorm.weight": "model-00001-of-00003.safetensors",
360
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
361
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
362
+ "model.layers.9.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
363
+ "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
364
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
365
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
366
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
367
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
368
+ "model.norm.weight": "model-00003-of-00003.safetensors"
369
+ }
370
+ }
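
For reference, this index is what lets loaders pull individual tensors without reading every shard: `weight_map` maps each parameter name to the shard that contains it, and `total_size` is the summed byte size. A minimal sketch (assuming the files have been downloaded locally and the `safetensors` package is installed):

```python
# Sketch only: look up a single tensor through model.safetensors.index.json
# instead of loading all three shards.
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "model.layers.0.self_attn.q_proj.weight"
shard = index["weight_map"][name]            # -> "model-00001-of-00003.safetensors"
with safe_open(shard, framework="pt", device="cpu") as shard_file:
    tensor = shard_file.get_tensor(name)

print(tensor.shape, tensor.dtype)
```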