trelis_voice / output.txt
rs545837's picture
Upload folder using huggingface_hub
0175bf7 verified
trelis_0.wav|I'm going to walk you through 10 quick tips for fine tuning. For each of those, I'll point you to one or two trellis videos on YouTube and also point you to the right branch if you're working out of the trellis advanced fine tuning repository. Tip number one is to start with a small model. I recommend starting with something like Lama 3 8B or Phi 3 Mini.|1
trelis_1.wav|This one, I use a relatively small model as I do in many of my fine tuning tutorials, just because it's quicker to learn fast. Tip number two is to use LoRa or QLoRa. I don't recommend starting off with full fine-tuning for a few reasons. First of all, LoRa and QLoRa allow you to start with fewer GPUs or a smaller GPU. That's going to make iteration faster.|1
trelis_2.wav|So you want to create 10 question answer pairs and use those to choose which base model is going to perform best. So just by running those on different base models, you can see which one is going to give you the best baseline for starting off your fine tuning. Then after you do any training run, you want to run that manual test. and just evaluate whether the model is doing well.|1
trelis_3.wav|This gives you probably a better sense than solely looking at the eval and training loss during the fine-tuning process. This is what I do in this memorization video as well, which you can check out on YouTube, and you'll see in the memorization scripts how I allow you to set up this manual dataset. That's also possible in the unsloth branch and the multi-GPU branch, which I'll get to later.|1
trelis_4.wav|Tip number four is to create data sets manually. Yes, I know this is a bit of work, but I think it's underrated. When you manually curate a data set like I did for the trellis function calling data set, it lets you appreciate exactly which rows of data are needed to get the performance that you need. You can, of course, use Python and chat GPT to help automate some of this and generate rows.|1
trelis_5.wav|If you do want to automate a little more how you generate synthetic data sets, you can check out this video here on data set preparation with LLMs. Tip number six is always use a validation data set. If you don't have one, you can just split off 10 to 20% of your training data set. You want to be checking your training loss as you progress along the process.|1
trelis_6.wav|Then, as a very last step, you can think about moving to a larger model where it's going to take more time and money to get in that final result. There are two videos of relevance here. If you want to understand the pros and cons of full fine-tuning versus QLORA or LoRa, take a look at this video. And if you want to understand the complexities of doing multi-GPU training, check out multi-GPU fine-tuning.|1
trelis_7.wav|Moving to two last tips, tip number nine is to use unsupervised fine tuning. This can be useful if you have a large data set. I'm going to say larger than 10,000 rows of data. Here, you'll need to use Python scripts in order to clean up, say, repeated characters or too much new lines. You can also use language models in order to clean up the data set chunk by chunk.|1
trelis_8.wav|The video of relevance here is the Wikipedia video I made, where I first extract data from Wikipedia, clean it, and then use it for fine tuning. Last of all, my tip number 10 is to do preference fine-tuning. This is where you have a data set with chosen, which are better or preferred responses, and rejected, which are the responses to the same prompts but are of lower quality.|1
trelis_9.wav|The preference fine-tuning will move your model to give responses more like your chosen answers and less like your rejected answers, which is useful if you want to do some fine-tuning for tone or style, or if you want to make some corrections where the model's giving a response you don't quite like. Here I recommend the Orpo YouTube video, and there's also a branch by that name in Advanced Fine Tuning.|1
trelis_10.wav|Orpo is also supported in the Unslot branch, where there's a Python Jupyter notebook and also just a Python.py script you can run. And Orpo is supported as an option in the Multi-GPU branch too. So to recap these 10 tips, start with a small model, use LoRa or QLoRa, not full fine-tuning. Always create 10 manual test questions or maybe a few more. Remember that manual data sets are probably underrated.|1
trelis_11.wav|You can always get a little bit of help from Python or from chat GPT. Start training on a small number of rows, even just one row to test the pipeline, but then 100, and make sure it's having a good effect before you decide to scale up. Make sure you know that the data type and the data set that you've set up is actually the right one.|1
trelis_12.wav|Number six, always use a validation set. Just split one off from a training set if you don't have one. Number seven, try to just start training on one GPU. Number eight, use weights and biases for tracking. And when you're scaling from small to large, increase first the rows, then move to using more VRAM with LoRa instead of QLoRa or full fine tuning instead of LoRa.|1
trelis_13.wav|By the way, there's a factor of four roughly in VRAM difference between each of those. So LoRa is about four times QLoRa and full fine tuning is about four times. LoRa, or even more in some cases. And last of all, increase to a larger model size only at the very end of your training process when you think you have a pipeline that's working well.|1
trelis_14.wav|Now, this approach here I've talked about for language models, but it also works for video and speech or images, multimodal models. So you can check out this video here on multimodal text plus image, where I prepare a data set and bring it through fine tuning. And likewise, for this speech to text model, where I prepare a data set and bring it through fine tuning.|1
trelis_15.wav|There are specific repos for multimodal. That's the vision repository here. And there's a repo for transcription. And this LLMs repo is the advanced fine-tuning repo I've been talking to date in or up until now in this presentation. I've laid out here all of the playlists that are relevant depending on what you need. So there are four different sections, four playlists and four repositories that go with them.|1
trelis_16.wav|This very last section of the video is for those who have purchased lifetime access to one of the Trellis repositories, but I'll just put it part of this public video because it will give a sense of what's in these repositories for those of you who might be interested to purchase lifetime membership later. The first repo is the advanced fine-tuning repo, and this is split into branches according to function.|1
trelis_17.wav|Now, the notebook is recommended if you want to go through the training the first time, you can see step by step what's happening and easily print out things at intermediate points. But when you've got your script honed, it can be a lot faster to run a Python script. And that's why I have made this script available, which you just run from the command line and it will go through everything within the training.|1
trelis_18.wav|Just to give you a sense of how you configure the training and test setup, you'll set a model slug. You will then set some parameters, like whether you want to fine tune in 4-bit, what data type you want to use, depending on your GPU. You can then choose a data set, say for function calling, or if you want to memorize some data, like on the rules of TouchRugby.|1
trelis_19.wav|Check out the live stream video on choosing LoRa parameters if you want to know more. You can set LoRa or LoRa alpha and also rank stabilize LoRa, set that to true or false. Here you've got some Weights and Biases project configurations. You set the project name, and then for each run, you can use a different name here for running in Weights and Biases.|1
trelis_20.wav|And this can be useful if your answers are quite short and you don't want the loss on all of the prompts to kind of crowd out or cloud out the information or the signal that's coming from training on the response or the answer. So you set the completions to true here. Sometimes I use this for function calling, fine tuning. And then you need to let the model know where your answer is starting.|1
trelis_21.wav|the number of epochs, the learning rate, an output directory for your training model and results, whether you want to train with BrainFloat 16 or not. You can set your scheduler. You can decide whether to save the model at a certain number of steps of training. set your max sequence length, gradient checkpointing, and whether to use re-entrancy, which allows you to speed up the training.|1
trelis_22.wav|Next, you can decide whether you want to use ORPO or not. By default, I've got that set to false. If you're using ORPO, you need a column that's called chosen and one called rejected. and you can set your max prompt length and then the beta. The beta basically weighs how much of the preference fine-tuning, what's the importance of that loss relative to the standard SFT loss.|1
trelis_23.wav|It will set up the tokenizer, set up the chat template, load the dataset, either from your manual data that's in the repo or from Hugging Face, and then it will run inference through all of those samples and print the results out to file. Just as an example, I can show you within test output, you'll see here a large number of tests that I have run.|1
trelis_24.wav|It has the test.py and the train.py file that will allow you to run testing and training. And I'll just briefly show you the config file. So at the start here, you'll see this parameter that's not in the unsloth branch. If you set it to auto, it will just do standard training. You can train on multiple GPUs, but it will be pipeline parallel, so not quite as efficient.|1
trelis_25.wav|Then you can fine tune Lava, IdaFix and, or IdaFix and Moondream models. You can do multimodal server setup with text generation inference. There's a one-click template for running an IdaFix server, including on a custom model. And last of all, there is a script for fine-tuning multimodal text plus video models. This is basically a variation on text plus image models where you split the video into multiple images.|1
trelis_26.wav|So the idea is to use a very fast and relatively small language model to pick out the right snippets and then include those snippets in the context of a more powerful model like, say, GPT-4. There's also a folder now on privacy, which allows you to basically hide information, like personal information on credit cards, names, email addresses, before you send it to a third-party API so that you can reduce any data privacy risks.|1
trelis_27.wav|Last of all, there's the advanced transcription repository. This one here allows you to generate data if you want to fine tune a whisper model and then do the fine tuning. And again, much of the 10 tips that I provided earlier are going to apply here for transcription. And that is it for my 10 tips on fine-tuning. If I've left anything out, please let me know below in the comments and I'll get back to you.|1