TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)
Deepmoney 34B 200K Base - GPTQ
- Model creator: triad party
- Original model: Deepmoney 34B 200K Base
Description
This repo contains GPTQ model files for triad party's Deepmoney 34B 200K Base.
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by Massed Compute.
Repositories available
- AWQ model(s) for GPU inference.
- GPTQ models for GPU inference, with multiple quantisation parameter options.
- 2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference.
- triad party's original unquantised fp16 model in PyTorch format, for GPU inference and for further conversions.
Prompt template: None
```
{prompt}
```
Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis:

- text-generation-webui
- KoboldAI United
- LoLLMS Web UI
- Hugging Face Text Generation Inference (TGI)

This may not be a complete list; if you know of others, please let me know!
Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
Explanation of GPTQ parameters
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" (stored as -1 in quantize_config.json) means no grouping at all, and so uses the least VRAM of the options; see the sketch after this list.
- Act Order: True or False. Also known as desc_act. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama-architecture models in 4-bit.
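These parameters are recorded in each branch's quantize_config.json, which loaders read automatically. A minimal sketch of inspecting them, assuming the standard AutoGPTQ field names (the values in the comments are what you would expect for this repo's main branch):

```python
# A minimal sketch: inspect the GPTQ parameters recorded in quantize_config.json.
# Field names follow AutoGPTQ's standard config; the expected values shown in
# the comments correspond to this repo's `main` branch.
import json

with open("deepmoney-34b-200k-base-GPTQ/quantize_config.json") as f:
    cfg = json.load(f)

print(cfg["bits"])          # 4
print(cfg["group_size"])    # -1  (i.e. "None": no grouping)
print(cfg["desc_act"])      # True (Act Order)
print(cfg["damp_percent"])  # 0.1
```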
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| main | 4 | None | Yes | 0.1 | VMware Open Instruct | 8192 | 18.60 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| gptq-4bit-128g-actorder_True | 4 | 128 | Yes | 0.1 | VMware Open Instruct | 8192 | 19.25 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| gptq-4bit-32g-actorder_True | 4 | 32 | Yes | 0.1 | VMware Open Instruct | 8192 | 21.21 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| gptq-3bit-128g-actorder_True | 3 | 128 | Yes | 0.1 | VMware Open Instruct | 8192 | 15.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| gptq-8bit--1g-actorder_True | 8 | None | Yes | 0.1 | VMware Open Instruct | 8192 | 35.34 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| gptq-3bit-32g-actorder_True | 3 | 32 | Yes | 0.1 | VMware Open Instruct | 8192 | 16.90 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| gptq-8bit-128g-actorder_True | 8 | 128 | Yes | 0.1 | VMware Open Instruct | 8192 | 36.11 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
How to download, including from branches
In text-generation-webui
To download from the `main` branch, enter `TheBloke/deepmoney-34b-200k-base-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/deepmoney-34b-200k-base-GPTQ:gptq-4bit-128g-actorder_True`.
From the command line
I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `deepmoney-34b-200k-base-GPTQ`:

```shell
mkdir deepmoney-34b-200k-base-GPTQ
huggingface-cli download TheBloke/deepmoney-34b-200k-base-GPTQ --local-dir deepmoney-34b-200k-base-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir deepmoney-34b-200k-base-GPTQ
huggingface-cli download TheBloke/deepmoney-34b-200k-base-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir deepmoney-34b-200k-base-GPTQ --local-dir-use-symlinks False
```
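If you prefer to script the download, the same `huggingface-hub` library exposes a Python API. A minimal sketch, equivalent to the commands above:

```python
# A minimal sketch: download a specific branch with the huggingface_hub Python API,
# equivalent to the huggingface-cli commands above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/deepmoney-34b-200k-base-GPTQ",
    revision="gptq-4bit-128g-actorder_True",  # omit to fetch the main branch
    local_dir="deepmoney-34b-200k-base-GPTQ",
    local_dir_use_symlinks=False,
)
```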
More advanced huggingface-cli download usage

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder, so it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
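If you do use the cache, `huggingface_hub` also provides a cache-scanning API to see where your disk space is going. A minimal sketch:

```python
# A minimal sketch: list cached repos and their disk usage via huggingface_hub.
from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()
for repo in cache_info.repos:
    print(repo.repo_id, repo.size_on_disk_str)
```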
For more documentation on downloading with `huggingface-cli`, please see: HF -> Hub Python Library -> Download files -> Download from the CLI.

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir deepmoney-34b-200k-base-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/deepmoney-34b-200k-base-GPTQ --local-dir deepmoney-34b-200k-base-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
With git (not recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/deepmoney-34b-200k-base-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob).
How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui.
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the Model tab.
2. Under Download custom model or LoRA, enter `TheBloke/deepmoney-34b-200k-base-GPTQ`.
   - To download from a specific branch, enter for example `TheBloke/deepmoney-34b-200k-base-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option.
3. Click Download.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to Model.
6. In the Model dropdown, choose the model you just downloaded: `deepmoney-34b-200k-base-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right.
   - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the Text Generation tab and enter a prompt to get started!
Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/deepmoney-34b-200k-base-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''{prompt}
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(f"Model output: {response}")
```
Python code example: inference from this GPTQ model
Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
Example Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/deepmoney-34b-200k-base-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"  # unused here: this model has no prompt template
prompt_template = f'''{prompt}
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
ExLlama is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
Discord
For further support, and discussions on these models and AI in general, join us at: TheBloke AI's Discord server: https://discord.gg/theblokeai
Thanks, and how to contribute
Thanks to the chirper.ai team!
Thanks to Clay from gpus.llm-utils.org!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Aemon Algiz.
Patreon special mentions: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
Original model card: triad party's Deepmoney 34B 200K Base
Deepmoney
Introducing Greed in the Seven Deadly Sins series of models.
- Full-parameter pre-training on Yi-34b
- High-quality research reports
- High-end cleaning process
1. What do I want to do?
Most current so-called financial models are trained largely on public knowledge, but in the actual financial field this public knowledge is often seriously insufficient for interpreting the current market. If you are interested, you can look into the various propositions of Keynes, Friedman, and even current behavioral finance. From my observation, most financial models cannot make investment judgments, because they are trained on ordinary textbooks, entry-level analyst exams, and even public company reports. I think this is of very little value for investment.
You may think I'm joking, but the fact is that the reasoning of many subjective analysts may not be as rigorous as that of large models of 34b and above (excluding the excellent ones, of course). The market changes every moment, with a flood of news and massive data arriving in real time. For most retail investors, instead of waiting for a crappy analyst to write a report, why not use a large model to build a pipeline?
In my plan, this model is the base model of that process; models such as an information collector, target judge, qualitative analyst, quantitative analyst, and data extractor are all parts of it (see the hypothetical sketch below). But it is undoubtedly important for the base model itself to master a large number of qualitative and quantitative methods. That's why this model was born.
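A hypothetical sketch of how such a pipeline might be wired together; every stage name, interface, and placeholder body below is an illustrative assumption, not part of the released model:

```python
# A hypothetical sketch of the multi-model pipeline described above.
# All stage names and interfaces here are illustrative assumptions;
# in practice each stage would be its own model built on this base.

def collect_information(ticker: str) -> str:
    """Information collector: gather real-time news and data for a ticker."""
    return f"latest filings and news for {ticker}"

def judge_target(info: str) -> bool:
    """Target judge: decide whether the instrument is worth analysing."""
    return "filings" in info

def qualitative_analysis(info: str) -> str:
    """Qualitative analyst: subjective judgment over the collected material."""
    return f"qualitative view based on: {info}"

def quantitative_analysis(info: str) -> str:
    """Quantitative analyst: numeric analysis over extracted data."""
    return f"quantitative view based on: {info}"

def run_pipeline(ticker: str) -> str:
    info = collect_information(ticker)
    if not judge_target(info):
        return "skip"
    return qualitative_analysis(info) + " | " + quantitative_analysis(info)

print(run_pipeline("EXAMPLE"))
```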
2. About the data
As I just said, a lot of public knowledge has somewhat questionable validity - but that doesn't mean it's wrong. The theoretical support behind many research methods in research reports also relies on this knowledge. So in my training, I picked some college textbooks and some professional books: not a large quantity, but good quality. In addition, I selected a large number of research reports from 2019 to December 2023 - these reports are issued by a variety of publishers, including traditional brokers and professional research institutions. Most of them are paid and only available to institutions. But I got them anyway, through various means.
If you have read research reports, especially high-quality ones, you will find that research reports are all subjective judgment + quantitative analysis, and the data support in the quantitative analysis is crucial to the entire logical chain. In order to extract this data (most of it is in the form of charts or tables), I tried a lot of multimodal models, and the process was very painful. The conclusion is that cog-agent and emu2 are very effective for this kind of task. To extract information better, I created a process that summarizes the surrounding context of each research report and includes it as part of the prompt (a hypothetical sketch follows).
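A hypothetical sketch of that prompt-assembly step; all names here are illustrative assumptions (the multimodal extraction itself, which the author did with cog-agent and emu2, is not shown):

```python
# A hypothetical sketch of the extraction prompting described above:
# pair each chart/table image with a summary of the surrounding report text.
# All function and parameter names are illustrative assumptions,
# not the author's actual code.

def summarize_context(report_text: str, max_chars: int = 500) -> str:
    """Stand-in for a real summarisation step over the report's surrounding text."""
    return report_text[:max_chars]

def build_extraction_prompt(report_text: str) -> str:
    context = summarize_context(report_text)
    return (
        f"Report context: {context}\n"
        "Task: extract the numeric data from the attached chart or table, "
        "as rows of (metric, period, value)."
    )

print(build_extraction_prompt("In 2023 H1, the company's gross margin rose to..."))
```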
Finally, I blended the data. General-purpose data is not included, because this model exists purely for Greed. Moreover, the knowledge in industry research reports is comprehensive enough.
3. About training
Raw text, full-parameter training. The base is the long-context yi-34b-200k, which is necessary for reading and understanding an in-depth report in full.
Of course, I also did an SFT pass. That model is the analyzer in my process; I haven't broken down the qualitative and quantitative analysis yet, but I'm already blown away by how well it works.
More:
More technical details and evals coming soon…